1,107 research outputs found

    Nonparametric estimates of low bias

    We consider the problem of estimating an arbitrary smooth functional of $k \geq 1$ distribution functions (d.f.s) in terms of random samples from them. The natural estimate replaces the d.f.s by their empirical d.f.s. Its bias is generally $\sim n^{-1}$, where $n$ is the minimum sample size, with a $p$th order iterative estimate of bias $\sim n^{-p}$ for any $p$. For $p \leq 4$, we give an explicit estimate in terms of the first $2p - 2$ von Mises derivatives of the functional evaluated at the empirical d.f.s. These may be used to obtain unbiased estimates, where these exist and are of known form in terms of the sample sizes; our form for such unbiased estimates is much simpler than that obtained using polykays and tables of the symmetric functions. Examples include functions of a mean vector (such as the ratio of two means and the inverse of a mean), standard deviation, correlation, return times and exceedances. These $p$th order estimates require only $\sim n$ calculations. This is in sharp contrast with computationally intensive bias reduction methods such as the $p$th order bootstrap and jackknife, which require $\sim n^p$ calculations.
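As a point of comparison only (not the estimator constructed in the paper), the sketch below uses Monte Carlo to make the $\sim n^{-1}$ bias of the naive plug-in estimate visible for one of the quoted examples, the inverse of a mean; the exponential model, scale and sample sizes are illustrative assumptions.

```python
import numpy as np

# Illustrative only: the bias of the plug-in estimate 1/xbar for 1/mu is ~ c/n,
# so it should roughly halve each time n doubles. Exponential data assumed.
rng = np.random.default_rng(0)
mu = 2.0

def plug_in_bias(n, reps=200_000):
    xbar = rng.exponential(mu, size=(reps, n)).mean(axis=1)
    return (1.0 / xbar).mean() - 1.0 / mu

for n in (20, 40, 80):
    print(n, plug_in_bias(n))   # roughly halves as n doubles
```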

    Expansions about the gamma for the distribution and quantiles of a standard estimate

    We give expansions for the distribution, density, and quantiles of an estimate, building on results of Cornish, Fisher, Hill, Davis and the authors. The estimate is assumed to be non-lattice with the standard expansions for its cumulants. By expanding about a skew variable with matched skewness, one can drastically reduce the number of terms needed for a given level of accuracy. The building blocks generalize the Hermite polynomials. We demonstrate with expansions about the gamma.
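A rough numerical illustration of why expanding about a skew variable helps (this is not the paper's expansion; the lognormal model, sample size and skewness matching below are assumptions): a gamma reference with matched skewness approximates the distribution of a skew sample mean far better than the normal does.

```python
import numpy as np
from scipy import stats

# Compare two approximations to the CDF of a standardized mean of skew
# (lognormal) data: the normal, and a gamma with the same skewness.
# "Truth" is a Monte Carlo empirical CDF. All choices are illustrative.
rng = np.random.default_rng(1)
n, reps = 10, 400_000
means = rng.lognormal(0.0, 1.0, size=(reps, n)).mean(axis=1)
t = (means - means.mean()) / means.std()          # standardized means

skew = stats.skew(t)
k = 4.0 / skew**2                                 # gamma shape with skewness 2/sqrt(k)
x = np.linspace(-1.5, 3.0, 10)
truth = np.array([(t <= xi).mean() for xi in x])
print(np.abs(stats.norm.cdf(x) - truth).max())                       # normal error
print(np.abs(stats.gamma(k).cdf(k + x * np.sqrt(k)) - truth).max())  # gamma error (smaller)
```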

    The distribution of the maximum of a second order autoregressive process: the continuous case

    We give the distribution function of $M_n$, the maximum of a sequence of $n$ observations from an autoregressive process of order 2. Solutions are first given in terms of repeated integrals and then for the case where the underlying random variables are absolutely continuous. When the correlations are positive, $P(M_n \leq x) = a_{n,x}$, where $a_{n,x} = \sum_{j=1}^\infty \beta_{jx} \nu_{jx}^{n} = O(\nu_{1x}^{n})$, $\{\nu_{jx}\}$ are the eigenvalues of a non-symmetric Fredholm kernel, and $\nu_{1x}$ is the eigenvalue of maximum magnitude. The weights $\beta_{jx}$ depend on the $j$th left and right eigenfunctions of the kernel. These results are large deviations expansions for estimates, since the maximum need not be standardized to have a limit. In fact such a limit need not exist.
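For orientation only, here is a brute-force Monte Carlo estimate of $P(M_n \leq x)$ for a Gaussian AR(2) process; it is not the paper's Fredholm-eigenvalue expansion, and the coefficients, threshold and sample size are arbitrary illustrative choices.

```python
import numpy as np

# Brute-force estimate of P(M_n <= x) for a stationary Gaussian AR(2) process.
# phi1, phi2 lie inside the stationarity region; burn-in steps are discarded.
rng = np.random.default_rng(2)
phi1, phi2, n, x, reps, burn = 0.5, 0.3, 50, 3.0, 20_000, 200
hits = 0
for _ in range(reps):
    e = rng.standard_normal(n + burn)
    y = np.zeros(n + burn)
    for t in range(2, n + burn):
        y[t] = phi1 * y[t - 1] + phi2 * y[t - 2] + e[t]
    hits += y[burn:].max() <= x
print(hits / reps)   # Monte Carlo estimate of P(M_n <= x)
```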

    The distribution of the maximum of an ARMA(1, 1) process

    We give the cumulative distribution function of $M_n$, the maximum of a sequence of $n$ observations from an ARMA(1, 1) process. Solutions are first given in terms of repeated integrals and then for the case where the underlying random variables are absolutely continuous. The distribution of $M_n$ is then given as a weighted sum of the $n$th powers of the eigenvalues of a non-symmetric Fredholm kernel. The weights are given in terms of the left and right eigenfunctions of the kernel. These results are large deviations expansions for estimates, since the maximum need not be standardized to have a limit. In fact, such a limit need not exist.
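A quick numerical consistency check of the qualitative claim (not the paper's computation): if $P(M_n \leq x)$ behaves like a constant times $\nu_{1x}^n$, its Monte Carlo estimates should decay geometrically in $n$, so successive ratios stabilize. The ARMA(1, 1) parameters and threshold below are arbitrary.

```python
import numpy as np

# Simulate a Gaussian ARMA(1,1) process and estimate P(M_n <= x) for several n.
# Geometric decay in n shows up as roughly constant successive ratios.
rng = np.random.default_rng(3)
phi, theta, x, reps, burn = 0.6, 0.4, 2.0, 50_000, 100

def p_max_le(n):
    e = rng.standard_normal((reps, n + burn))
    y = np.zeros((reps, n + burn))
    for t in range(1, n + burn):
        y[:, t] = phi * y[:, t - 1] + e[:, t] + theta * e[:, t - 1]
    return (y[:, burn:].max(axis=1) <= x).mean()

p10, p20, p30 = p_max_le(10), p_max_le(20), p_max_le(30)
print(p10, p20, p30)
print(p20 / p10, p30 / p20)   # roughly equal ratios => geometric decay
```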

    Accurate inference for a one parameter distribution based on the mean of a transformed sample

    A great deal of inference in statistics is based on the approximation that a statistic is normally distributed. The error in doing so is generally $O(n^{-1/2})$ and can be considerable when the distribution is heavily biased or skew. This note shows how one may reduce this error to $O(n^{-(j+1)/2})$, where $j$ is a given integer. The case considered is when the statistic is the mean of the sample values from a continuous one-parameter distribution, after the sample has undergone an initial transformation.
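The sketch below illustrates the general idea that averaging a transformed sample can be much closer to normal than averaging the raw sample; it is a generic Wilson-Hilferty-style example, not the transformation constructed in this note, and the exponential model and cube-root transform are assumptions.

```python
import numpy as np
from scipy import stats

# Skewness of the mean of raw exponential data versus the mean of
# cube-rooted data: the transformed mean is far closer to symmetric,
# so the normal approximation error is much smaller.
rng = np.random.default_rng(4)
n, reps = 10, 200_000
x = rng.exponential(1.0, size=(reps, n))
print(stats.skew(x.mean(axis=1)))                     # ~ 2/sqrt(n), sizeable
print(stats.skew((x ** (1.0 / 3.0)).mean(axis=1)))    # much smaller
```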

    The chain rule for functionals with applications to functions of moments

    The chain rule for derivatives of a function of a function is extended to a function of a statistical functional, and applied to obtain approximations to the cumulants, distribution and quantiles of functions of sample moments, and so to obtain third order confidence intervals and estimates of reduced bias for functions of moments. As an example we give the distribution of the standardized skewness for a normal sample to magnitude $O(n^{-2})$, where $n$ is the sample size.
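As a first-order cousin of these results (a plain delta-method check, not the paper's third-order expansion), the chain rule for a smooth function of the sample mean gives $\operatorname{Var}\, g(\bar X) \approx g'(\mu)^2 \sigma^2 / n$; the function $g(t) = 1/t$ and the normal model below are illustrative choices.

```python
import numpy as np

# Delta-method (first-order chain rule) variance of g(xbar) with g(t) = 1/t,
# checked against Monte Carlo. Model and parameters are illustrative.
rng = np.random.default_rng(5)
mu, sigma, n, reps = 2.0, 1.0, 50, 200_000
xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
print(np.var(1.0 / xbar))                  # Monte Carlo variance
print((1.0 / mu**2) ** 2 * sigma**2 / n)   # delta-method approximation
```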

    The distribution and quantiles of functionals of weighted empirical distributions when observations have different distributions

    This paper extends Edgeworth-Cornish-Fisher expansions for the distribution and quantiles of nonparametric estimates in two ways. Firstly, it allows observations to have different distributions. Secondly, it allows the observations to be weighted in a predetermined way. The use of weighted estimates has a long history, including applications to regression, rank statistics and Bayes theory. However, asymptotic results have generally been only first order (the CLT and weak convergence). We give third order asymptotics for the distribution and percentiles of any smooth functional of a weighted empirical distribution, thus allowing a considerable increase in accuracy over earlier CLT results. Consider independent non-identically distributed (non-iid) observations $X_{1n}, \ldots, X_{nn}$ in $R^s$. Let $\hat{F}(x)$ be their weighted empirical distribution with weights $w_{1n}, \ldots, w_{nn}$. We obtain cumulant expansions, and hence Edgeworth-Cornish-Fisher expansions, for $T(\hat{F})$ for any smooth functional $T(\cdot)$ by extending the concept of von Mises derivatives to signed measures of total measure 1. As an example we give the cumulant coefficients needed for Edgeworth-Cornish-Fisher expansions to $O(n^{-3/2})$ for the sample variance when observations are non-iid.
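To fix notation, here is a minimal sketch of the objects involved (not the expansions themselves): a weighted empirical distribution $\hat F$ with predetermined weights summing to 1, and the variance functional $T(\hat F)$ evaluated at it. The weights and data below are arbitrary.

```python
import numpy as np

# A weighted empirical distribution hat{F} and the variance functional T(hat{F}).
rng = np.random.default_rng(6)
n = 20
x = rng.normal(size=n)              # observations X_1n, ..., X_nn
w = np.linspace(1.0, 2.0, n)
w /= w.sum()                        # predetermined weights, total measure 1

def F_hat(t):
    """Weighted empirical distribution function at t."""
    return np.sum(w * (x <= t))

m1 = np.sum(w * x)                  # mean under hat{F}
T = np.sum(w * (x - m1) ** 2)       # variance functional T(hat{F})
print(F_hat(0.0), T)
```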

    Expansions for Quantiles and Multivariate Moments of Extremes for Distributions of Pareto Type

    Let $X_{nr}$ be the $r$th largest of a random sample of size $n$ from a distribution $F(x) = 1 - \sum_{i=0}^\infty c_i x^{-\alpha - i\beta}$ for $\alpha > 0$ and $\beta > 0$. An inversion theorem is proved and used to derive an expansion for the quantile $F^{-1}(u)$ and powers of it. From this an expansion in powers of $(n^{-1}, n^{-\beta/\alpha})$ is given for the multivariate moments of the extremes $\{X_{n, n - s_i}, 1 \leq i \leq k\}/n^{1/\alpha}$ for fixed ${\bf s} = (s_1, \ldots, s_k)$, where $k \geq 1$. Examples include the Cauchy, Student $t$, $F$, second extreme distributions and stable laws of index $\alpha < 1$.
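For the Cauchy example, the leading term of such a quantile expansion is easy to check numerically: $1 - F(x) \sim 1/(\pi x)$, so $F^{-1}(u) \sim 1/(\pi(1-u))$ as $u \to 1$. The sketch below verifies only this leading term, not the paper's full expansion.

```python
import numpy as np
from scipy import stats

# Exact Cauchy quantile versus the leading Pareto-type term 1/(pi*(1-u));
# the ratio tends to 1 as u -> 1.
for u in (0.9, 0.99, 0.999):
    exact = stats.cauchy.ppf(u)
    leading = 1.0 / (np.pi * (1.0 - u))
    print(u, exact, leading, exact / leading)
```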

    Fredholm equations for non-symmetric kernels, with applications to iterated integral operators

    We give the Jordan form and the singular value decomposition for an integral operator ${\cal N}$ with a non-symmetric kernel $N(y,z)$. This is used to give solutions of Fredholm equations for non-symmetric kernels, and to determine the behaviour of ${\cal N}^n$ and $({\cal N}{\cal N}^*)^n$ for large $n$.
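A minimal Nystrom-style sketch of how one might examine such an operator numerically (a discretization assumption on our part, not the paper's construction): approximate ${\cal N}$ by a quadrature-weighted matrix on a grid, take its SVD, and use matrix powers to stand in for ${\cal N}^n$.

```python
import numpy as np

# Discretize a non-symmetric kernel N(y, z) on an m-point grid, form the
# quadrature-weighted matrix, and compute its SVD and powers.
m = 200
grid, h = np.linspace(0.0, 1.0, m, retstep=True)
K = np.exp(-np.abs(grid[:, None] - 2.0 * grid[None, :]))  # example non-symmetric kernel
A = K * h                                                 # Nystrom matrix approximating N
U, s, Vt = np.linalg.svd(A)
print(s[:3])                                              # leading singular values
print(np.linalg.norm(np.linalg.matrix_power(A, 10)))      # size of N^n for n = 10
```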

    The distribution of the maximum of a first order moving average: the continuous case

    We give the distribution of $M_n$, the maximum of a sequence of $n$ observations from a moving average of order 1. Solutions are first given in terms of repeated integrals and then for the case where the underlying independent random variables have an absolutely continuous density. When the correlation is positive, $P(M_n \leq x) = \sum_{j=1}^\infty \beta_{jx} \nu_{jx}^{n} \approx B_x \nu_{1x}^{n}$, where $\{\nu_{jx}\}$ are the eigenvalues (singular values) of a Fredholm kernel and $\nu_{1x}$ is the eigenvalue of maximum magnitude. A similar result is given when the correlation is negative. The result is analogous to large deviations expansions for estimates, since the maximum need not be standardized to have a limit. For the continuous case the integral equations for the left and right eigenfunctions are converted to first order linear differential equations. The eigenvalues satisfy an equation of the form $\sum_{i=1}^\infty w_i(\lambda - \theta_i)^{-1} = \lambda - \theta_0$ for certain known weights $\{w_i\}$ and eigenvalues $\{\theta_i\}$ of a given matrix. This can be solved by truncating the sum to an increasing number of terms.
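The final step described above, solving $\sum_i w_i(\lambda - \theta_i)^{-1} = \lambda - \theta_0$ after truncating the sum, can be sketched as a one-dimensional root find; the weights and $\theta_i$ below are made-up placeholders, not values from the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Solve the truncated secular equation sum_i w_i/(lam - theta_i) = lam - theta_0
# for the root to the right of the largest theta_i (placeholder inputs).
theta0 = 0.1
theta = np.array([0.9, 0.5, 0.3, 0.2])   # placeholder theta_i, decreasing
w = np.array([0.4, 0.2, 0.1, 0.05])      # placeholder positive weights

def f(lam):
    return np.sum(w / (lam - theta)) - (lam - theta0)

# f -> +inf as lam -> theta.max()+ and f -> -inf as lam -> inf, so a root exists.
lam = brentq(f, theta.max() + 1e-9, theta.max() + 10.0)
print(lam)
```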